Storage Redundancy (1 / 16): You are an engineer at Omega Corp, a company that heavily relies on Azure services. Currently, the company's Azure Storage account omegaStorageAccount within the resource group omegaRG is set up with Locally Redundant Storage (LRS) for data replication.
Recently, there have been growing concerns about the company's disaster recovery strategy. The business is expanding rapidly and serves customers globally, so the platform must stay highly available even in the event of a regional failure.
There have already been several incidents in which Omega Corp's global clients could not reach the services because of regional disruptions, which is unacceptable for the business. Furthermore, your boss wants a "backup" option in case the usual data access path fails, and the system has to be persistent: it should keep retrying even when it runs into transient hiccups.
As an engineer at Omega Corp, your first task is to update the storage account to meet these new requirements.
Your second task is to modify the existing code below to make use of these changes:
var accountName = "omegaStorageAccount";
var primaryAccountUri = new Uri($"https://{accountName}.blob.core.windows.net/");
var blobClientOptions = new BlobClientOptions()
{
// Configure the retry policy to handle high bursts of user activity, transient faults, and network-related issues.
// Take into consideration the number of retry attempts (5), delay between retries (1s), maximum waiting time (100s),
// and the smart use of the secondary location.
Retry = { /* Options */ }
// More options
};
var blobServiceClient = new BlobServiceClient(primaryAccountUri, new DefaultAzureCredential(), blobClientOptions);
Bonus question: Your boss asks you when all of this will be completed, assuming the coding itself takes you no time.
Answer:
To meet the new requirements, we need to change the replication option to read-access geo-zone-redundant storage (RA-GZRS), which keeps the data zone-redundant in the primary region, geo-replicates it to a secondary region, and allows read access to the secondary during a regional outage. The Azure CLI command for this is:
az storage account update --name omegaStorageAccount --resource-group omegaRG --sku Standard_RAGZRS
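As a quick sanity check (assuming you have the Azure CLI available and are signed in to the right subscription), you can confirm the new SKU afterwards:
az storage account show --name omegaStorageAccount --resource-group omegaRG --query sku.name --output tsv
This should print Standard_RAGZRS once the change has taken effect.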
Given the boss's directive that the system keep working even during bursts of user activity and transient faults, we need retry logic in the application. We'll set the maximum number of retries to 5 and use the exponential retry policy so that the delay between attempts grows gradually. We'll also set the GeoRedundantSecondaryUri property so that eligible read retries are automatically sent to the secondary endpoint when the primary is unavailable:
using System;
using Azure.Core;
using Azure.Identity;
using Azure.Storage.Blobs;

var accountName = "omegaStorageAccount";
var primaryAccountUri = new Uri($"https://{accountName}.blob.core.windows.net/");
var secondaryAccountUri = new Uri($"https://{accountName}-secondary.blob.core.windows.net/");

var blobClientOptions = new BlobClientOptions()
{
    // Determines how the client retries requests that fail with transient errors
    Retry =
    {
        MaxRetries = 5,                             // retry a failed request up to 5 times
        Mode = RetryMode.Exponential,               // increase the delay between attempts exponentially
        Delay = TimeSpan.FromSeconds(1),            // initial delay between retries
        MaxDelay = TimeSpan.FromSeconds(60),        // upper bound on the delay between retries
        NetworkTimeout = TimeSpan.FromSeconds(100)  // maximum time to wait on a single network operation
    },
    // Read retries alternate between the primary and secondary URIs.
    // If the secondary responds with 404, it is not used again for that operation,
    // since the data may not have propagated to the secondary location yet.
    GeoRedundantSecondaryUri = secondaryAccountUri
};
var blobServiceClient = new BlobServiceClient(primaryAccountUri, new DefaultAzureCredential(), blobClientOptions);
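For illustration (this isn't required by the task), here is a minimal sketch of how the configured client might be exercised: the loop simply lists containers through the primary endpoint, and the second part checks how far the secondary lags behind. Note that GetStatistics has to be called against the secondary endpoint and only works once read access to the secondary is enabled:
// Reads go through the primary endpoint; the retry policy and the
// GeoRedundantSecondaryUri fallback are applied automatically by the client.
foreach (var container in blobServiceClient.GetBlobContainers())
{
    Console.WriteLine(container.Name);
}

// Optional: check replication lag by querying the secondary endpoint directly.
var secondaryServiceClient = new BlobServiceClient(secondaryAccountUri, new DefaultAzureCredential(), blobClientOptions);
var stats = secondaryServiceClient.GetStatistics();
Console.WriteLine($"Geo-replication: {stats.Value.GeoReplication.Status}, last synced {stats.Value.GeoReplication.LastSyncedOn}");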
Bonus answer: changing the redundancy configuration is not instantaneous; the conversion can take up to 72 hours to complete, so that is the estimate to give the boss.
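If the boss wants to follow the progress, one way to spot-check the account from the CLI (assuming your CLI version supports expanding geo-replication statistics) is:
az storage account show --name omegaStorageAccount --resource-group omegaRG --expand geoReplicationStats --query "{sku: sku.name, status: geoReplicationStats.status, lastSync: geoReplicationStats.lastSyncTime}"
Once the conversion has finished, the SKU shows Standard_RAGZRS and the geo-replication status reports whether the secondary is live.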